User-generated-content (UGC) videos have dominated the Internet in recent years. While many methods attempt to objectively assess the quality of these UGC videos, the mechanisms of human quality perception in the UGC-VQA problem are still to be explored. To better explain the quality perception mechanisms and to learn more robust representations, we aim to disentangle the effects of aesthetic quality issues and technical quality issues arising from the complicated video generation processes in the UGC-VQA problem. To overcome the absence of respective supervision during disentanglement, we propose the Limited View Biased Supervisions (LVBS) scheme, where two separate evaluators are trained with decomposed views specifically designed for each issue. Composed of an Aesthetic Quality Evaluator (AQE) and a Technical Quality Evaluator (TQE) under the LVBS scheme, the proposed Disentangled Objective Video Quality Evaluator (DOVER) reaches excellent performance (0.91 SRCC on KoNViD-1k, 0.89 SRCC on LSVQ, 0.88 SRCC on YouTube-UGC) on the UGC-VQA problem. More importantly, our blind subjective studies prove that the separate evaluators in DOVER can effectively match human perception of the respective disentangled quality issues. Code and demos are released at https://github.com/teowu/dover.
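The abstract describes two separate evaluators trained on decomposed views whose outputs jointly explain overall quality. Below is a minimal PyTorch-style sketch of such a two-branch design, assuming each branch yields a scalar score and the overall quality is a simple weighted fusion; the backbones, feature dimension, and fusion weight are illustrative assumptions, not DOVER's published architecture.

```python
import torch
import torch.nn as nn

def toy_backbone(feat_dim: int = 768) -> nn.Module:
    # Stand-in feature extractor: global-average-pool a (B, 3, T, H, W) clip, then project.
    return nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(3, feat_dim))

class TwoBranchEvaluator(nn.Module):
    """Two evaluators, each seeing only the view decomposed for its own quality issue.

    The backbones, feature dimension, and weighted-sum fusion are illustrative
    assumptions, not DOVER's exact design."""

    def __init__(self, aesthetic_backbone: nn.Module, technical_backbone: nn.Module,
                 feat_dim: int = 768, aesthetic_weight: float = 0.5):
        super().__init__()
        self.aesthetic_backbone = aesthetic_backbone
        self.technical_backbone = technical_backbone
        self.aesthetic_head = nn.Linear(feat_dim, 1)
        self.technical_head = nn.Linear(feat_dim, 1)
        self.aesthetic_weight = aesthetic_weight

    def forward(self, aesthetic_view: torch.Tensor, technical_view: torch.Tensor):
        # Each evaluator only sees the view designed for its own quality issue.
        a_score = self.aesthetic_head(self.aesthetic_backbone(aesthetic_view)).squeeze(-1)
        t_score = self.technical_head(self.technical_backbone(technical_view)).squeeze(-1)
        overall = self.aesthetic_weight * a_score + (1 - self.aesthetic_weight) * t_score
        return a_score, t_score, overall

# Toy usage: an aesthetic view (resized clip) and a technical view (small crops).
model = TwoBranchEvaluator(toy_backbone(), toy_backbone())
a, t, q = model(torch.rand(2, 3, 8, 224, 224), torch.rand(2, 3, 8, 32, 32))
```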
Current deep video quality assessment (VQA) methods usually incur high computational costs when evaluating high-resolution videos. This prevents them from learning better video-quality-related representations via end-to-end training. Existing methods typically resort to naive sampling to reduce the computational cost, such as resizing and cropping. However, these operations evidently corrupt quality-related information in videos and are therefore not optimal for learning good representations for VQA. Thus, a new quality-preserving sampling scheme for VQA is desired. In this paper, we propose Grid Mini-patch Sampling (GMS), which allows local quality to be considered by sampling patches at their raw resolution and covers global quality via mini-patches sampled on a uniform grid. These mini-patches are spliced and aligned, named fragments. We further build the specially designed Fragment Attention Network (FANet) to accommodate fragments as inputs. Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations. It improves state-of-the-art accuracy by about 10% while reducing FLOPs by 99.5% on 1080p high-resolution videos. The newly learned video-quality-related representations can also be transferred to smaller VQA datasets, improving performance in those scenarios. Extensive experiments show that FAST-VQA performs well on inputs of various resolutions while maintaining high efficiency. We release our code at https://github.com/timothyhtimothy/fast-vqa.
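As a rough illustration of the grid mini-patch idea described above, the sketch below samples one raw-resolution mini-patch from each cell of a uniform grid on a single frame and splices the crops into a small "fragment". The grid size, patch size, and random per-cell offsets are assumptions for illustration; temporal alignment across frames and the Fragment Attention Network are omitted.

```python
from typing import Optional

import torch

def grid_mini_patch_sample(frame: torch.Tensor, grid: int = 7, patch: int = 32,
                           generator: Optional[torch.Generator] = None) -> torch.Tensor:
    """Simplified single-frame grid mini-patch sampling.

    Splits the frame (C, H, W) into a uniform `grid` x `grid` layout, crops one
    raw-resolution `patch` x `patch` mini-patch from a random location inside each
    cell, and splices the crops back in grid order into a fragment of shape
    (C, grid*patch, grid*patch). Grid size, patch size, and per-cell random offsets
    are illustrative assumptions; temporal alignment across frames is omitted.
    """
    c, h, w = frame.shape
    cell_h, cell_w = h // grid, w // grid
    rows = []
    for i in range(grid):
        cols = []
        for j in range(grid):
            # Random top-left corner of the mini-patch inside cell (i, j).
            dy = torch.randint(0, max(cell_h - patch, 1), (1,), generator=generator).item()
            dx = torch.randint(0, max(cell_w - patch, 1), (1,), generator=generator).item()
            y, x = i * cell_h + dy, j * cell_w + dx
            cols.append(frame[:, y:y + patch, x:x + patch])
        rows.append(torch.cat(cols, dim=2))
    return torch.cat(rows, dim=1)

# Example: a 1080p frame becomes a 224x224 fragment (7 * 32 = 224).
fragment = grid_mini_patch_sample(torch.rand(3, 1080, 1920))
```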
In existing works, the temporal relations between frames and their influence on video quality assessment (VQA) remain under-explored. These relations lead to two important types of effects on video quality. First, some temporal variations (such as shaking, flicker, and abrupt scene transitions) cause temporal distortions and lead to extra quality degradation, while other variations (e.g., those related to meaningful events) do not. Second, the human visual system often attends differently to frames with different contents, resulting in their different importance to the overall video quality. Based on the prominent time-series modeling ability of transformers, we propose a novel and effective transformer-based VQA method to tackle these two issues. To better distinguish temporal variations and thereby capture temporal distortions, we design a transformer-based Spatial-Temporal Distortion Extraction (STDE) module. To tackle temporal quality attention, we propose an encoder-like Temporal Content Transformer (TCT). We also introduce temporal sampling on features to reduce the input length of the TCT, so as to improve the learning effectiveness and efficiency of this module. Consisting of the STDE and the TCT, the proposed Temporal Distortion-Content Transformer for Video Quality Assessment (DisCoVQA) reaches state-of-the-art performance on several VQA benchmarks without any extra pre-training datasets, with up to 10% better generalization ability than existing methods. We also conduct extensive ablation experiments to prove the effectiveness of each part of our proposed model, and provide visualizations to demonstrate that the proposed modules achieve our intention of modeling these temporal issues. We will release our code and pretrained weights later.
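To make the feature-level temporal sampling and content-weighted pooling ideas more concrete, here is a small hedged sketch: frame features are temporally subsampled, encoded by a transformer, and pooled with learned per-frame importance weights. The layer sizes, the subsampling stride, and this exact pooling rule are assumptions rather than DisCoVQA's actual STDE/TCT design.

```python
import torch
import torch.nn as nn

class TemporalContentWeighting(nn.Module):
    """Hedged sketch of temporal-attention-style quality pooling.

    Frame-level features are subsampled in time (shortening the sequence, in the
    spirit of the abstract's feature-level temporal sampling), passed through a
    small transformer encoder, and used to predict per-frame importance weights
    that re-weight per-frame quality scores. All sizes are illustrative."""

    def __init__(self, feat_dim: int = 256, stride: int = 2, num_layers: int = 2):
        super().__init__()
        self.stride = stride
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.weight_head = nn.Linear(feat_dim, 1)   # per-frame importance
        self.quality_head = nn.Linear(feat_dim, 1)  # per-frame quality

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, T, D); keep every `stride`-th frame feature.
        x = frame_feats[:, ::self.stride]
        x = self.encoder(x)
        weights = torch.softmax(self.weight_head(x).squeeze(-1), dim=1)  # (B, T')
        scores = self.quality_head(x).squeeze(-1)                        # (B, T')
        return (weights * scores).sum(dim=1)                             # (B,)

video_score = TemporalContentWeighting()(torch.rand(2, 32, 256))
```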
In the management of lung nodules, we aim to predict nodule evolution in terms of its diameter variation on computed tomography (CT) scans, and then provide follow-up recommendations according to the predicted growth trend of the nodule. To improve the performance of growth-trend prediction for lung nodules, it is vital to compare the changes of the same nodule across consecutive CT scans. Motivated by this, we screened 4,666 subjects with more than two consecutive CT scans from the National Lung Screening Trial (NLST) dataset to organize a temporal dataset called NLSTT. Specifically, we first detect and pair regions of interest (ROIs) covering the same nodule based on registered CT scans. After that, we predict the texture category and diameter size of the nodules through models. Finally, we annotate the evolution class of each nodule according to its changes in diameter. Based on the constructed NLSTT dataset, we propose a siamese encoder to simultaneously exploit the discriminative features of 3D ROIs detected from consecutive CT scans. We then design a novel spatial-temporal mixer (STM) to leverage the interval changes of the same nodule across sequential 3D ROIs and to capture the spatial dependencies of nodule regions within the current 3D ROI. Following clinical diagnosis routine, we employ a hierarchical loss to pay more attention to growing nodules. Extensive experiments on our organized dataset demonstrate the advantage of our proposed method. We also conduct experiments on an in-house dataset to evaluate the clinical utility of our method by comparing it against skilled clinicians.
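A hedged sketch of the siamese idea follows: one weight-shared 3D encoder embeds the paired ROIs of the same nodule from two consecutive scans, and a simple mixer (concatenation plus feature difference) feeds a growth-trend classifier. The encoder, the mixing rule, and the three-class output are illustrative assumptions and do not reproduce the proposed STM or the hierarchical loss.

```python
import torch
import torch.nn as nn

class SiameseGrowthPredictor(nn.Module):
    """Weight-shared (siamese) 3D encoder over two consecutive nodule ROIs,
    followed by a simple feature mixer and an evolution-class classifier.
    All layers and the mixing rule are illustrative assumptions."""

    def __init__(self, feat_dim: int = 128, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared weights for both scans
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.classifier = nn.Linear(feat_dim * 3, num_classes)

    def forward(self, roi_prev: torch.Tensor, roi_curr: torch.Tensor) -> torch.Tensor:
        f_prev, f_curr = self.encoder(roi_prev), self.encoder(roi_curr)
        # Concatenate both embeddings with their difference as a crude interval signal.
        mixed = torch.cat([f_prev, f_curr, f_curr - f_prev], dim=1)
        return self.classifier(mixed)  # logits over evolution classes

logits = SiameseGrowthPredictor()(torch.rand(2, 1, 32, 64, 64), torch.rand(2, 1, 32, 64, 64))
```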
Complex conversational settings such as persuasion involve communicating changes in attitude or behavior, so users' perspectives need to be addressed even when they are not directly related to the topic. In this work, we contribute a novel modular dialogue system framework that seamlessly integrates factual information and social content into persuasive dialogue. Our framework generalizes to any dialogue task that mixes social and task content. We conducted a study comparing users' evaluations of our framework against a baseline end-to-end generative model. We find that, compared with the end-to-end model that does not explicitly handle social content or factual questions, our framework is preferred on aspects including competence and friendliness.
In the field of cross-modal retrieval, single encoder models tend to perform better than dual encoder models, but they suffer from high latency and low throughput. In this paper, we present a dual encoder model called BagFormer that utilizes a cross-modal interaction mechanism to improve recall performance without sacrificing latency and throughput. BagFormer achieves this through bag-wise interactions, which allow text to be transformed to a more appropriate granularity and entity knowledge to be incorporated into the model. Our experiments demonstrate that BagFormer achieves results comparable to state-of-the-art single encoder models on cross-modal retrieval tasks, while also offering efficient training and inference with 20.72 times lower latency and 25.74 times higher throughput.
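For intuition about bag-wise interaction in a dual encoder, the sketch below scores a text-image pair by matching each text bag embedding to its most similar image patch and averaging the similarities, in the spirit of late-interaction retrieval. This max-similarity aggregation and the tensor shapes are assumptions, not BagFormer's published formulation.

```python
import torch

def bag_wise_score(text_bags: torch.Tensor, image_patches: torch.Tensor) -> torch.Tensor:
    """Hedged sketch of a bag-wise interaction score for a dual encoder.

    text_bags: (B, N_bag, D) embeddings of token bags (e.g., entity-level groups).
    image_patches: (B, N_patch, D) patch embeddings from the image encoder.
    Each bag is matched to its best-aligned image patch and the similarities are
    averaged; this aggregation is an assumption inspired by late-interaction
    retrieval, not necessarily BagFormer's exact mechanism.
    """
    text_bags = torch.nn.functional.normalize(text_bags, dim=-1)
    image_patches = torch.nn.functional.normalize(image_patches, dim=-1)
    sim = torch.einsum("bnd,bmd->bnm", text_bags, image_patches)  # (B, N_bag, N_patch)
    return sim.max(dim=-1).values.mean(dim=-1)                    # (B,)

scores = bag_wise_score(torch.rand(4, 6, 256), torch.rand(4, 49, 256))
```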
The past few years have witnessed the prevalence of self-supervised representation learning within the language and 2D vision communities. However, such advancements have not been fully migrated to the community of 3D point cloud learning. Different from previous pre-training pipelines for 3D point clouds that generally fall into the scope of either generative modeling or contrastive learning, in this paper, we investigate a translative pre-training paradigm, namely PointVST, driven by a novel self-supervised pretext task of cross-modal translation from an input 3D object point cloud to its diverse forms of 2D rendered images (e.g., silhouette, depth, contour). Specifically, we begin with deducing view-conditioned point-wise embeddings via the insertion of the viewpoint indicator, and then adaptively aggregate a view-specific global codeword, which is further fed into the subsequent 2D convolutional translation heads for image generation. We conduct extensive experiments on common task scenarios of 3D shape analysis, where our PointVST shows consistent and prominent performance superiority over current state-of-the-art methods under diverse evaluation protocols. Our code will be made publicly available.
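The sketch below illustrates one way to realize the view-conditioned aggregation described above: a viewpoint indicator is concatenated to point-wise features, re-embedded per point, and max-pooled into a view-specific global codeword that a 2D translation head could consume. The dimensions, the indicator format, and the pooling choice are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ViewConditionedCodeword(nn.Module):
    """Hedged sketch of view-conditioned aggregation for point-to-image translation.

    A viewpoint indicator (here: a 3-D view direction) is broadcast and concatenated
    to point-wise features, the result is re-embedded per point, and max-pooling
    yields a view-specific global codeword. Sizes and the pooling choice are
    illustrative assumptions, not PointVST's exact design."""

    def __init__(self, point_dim: int = 128, view_dim: int = 3, code_dim: int = 512):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(point_dim + view_dim, code_dim), nn.ReLU(),
            nn.Linear(code_dim, code_dim),
        )

    def forward(self, point_feats: torch.Tensor, view_dir: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, point_dim); view_dir: (B, view_dim)
        view = view_dir.unsqueeze(1).expand(-1, point_feats.size(1), -1)
        per_point = self.embed(torch.cat([point_feats, view], dim=-1))
        return per_point.max(dim=1).values  # (B, code_dim) view-specific codeword

codeword = ViewConditionedCodeword()(torch.rand(2, 1024, 128), torch.rand(2, 3))
```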
This paper utilizes an anomaly detection algorithm to check if underwater gliders are operating normally in the unknown ocean environment. Glider pilots can be warned of the detected glider anomaly in real time, thus taking over the glider appropriately and avoiding further damage to the glider. The adopted algorithm is validated by two valuable sets of data in real glider deployments, the University of South Florida (USF) glider Stella and the Skidaway Institute of Oceanography (SkIO) glider Angus.
Blind watermarking provides powerful evidence for copyright protection, image authentication, and tampering identification. However, it remains a challenge to design a watermarking model with high imperceptibility and robustness against strong noise attacks. To resolve this issue, we present a framework Combining the Invertible and Non-invertible (CIN) mechanisms. The CIN is composed of an invertible part to achieve high imperceptibility and a non-invertible part to strengthen robustness against strong noise attacks. For the invertible part, we develop a diffusion and extraction module (DEM) and a fusion and split module (FSM) to embed and extract watermarks symmetrically in an invertible way. For the non-invertible part, we introduce a non-invertible attention-based module (NIAM) and a noise-specific selection module (NSM) to handle the asymmetric extraction under strong noise attacks. Extensive experiments demonstrate that our framework significantly outperforms current state-of-the-art methods in both imperceptibility and robustness. Our framework achieves an average of 99.99% accuracy and 67.66 dB PSNR under noise-free conditions, and 96.64% accuracy and 39.28 dB PSNR under combined strong noise attacks. The code will be available at https://github.com/rmpku/CIN.
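To illustrate the invertible half of such a design, the sketch below uses a single additive coupling layer that embeds a watermark and inverts the transform exactly in the noise-free case. It is only a toy demonstration of invertible symmetry; CIN's actual DEM/FSM modules and its non-invertible branch for strong-noise extraction are not reproduced, and in practice only the stego image would be transmitted.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """Hedged sketch of the invertible idea: an additive coupling layer embeds a
    watermark into the image branch and can be inverted exactly to extract it
    symmetrically in the noise-free case. The tiny convolutions and the single
    coupling step are illustrative assumptions, not CIN's modules."""

    def __init__(self, channels: int = 3):
        super().__init__()
        self.f = nn.Conv2d(channels, channels, 3, padding=1)  # acts on the watermark branch
        self.g = nn.Conv2d(channels, channels, 3, padding=1)  # acts on the image branch

    def embed(self, image, mark):
        stego = image + self.f(mark)        # hide the watermark in the image branch
        mark_latent = mark + self.g(stego)  # forward transform of the watermark branch
        return stego, mark_latent

    def extract(self, stego, mark_latent):
        mark = mark_latent - self.g(stego)  # exact inverse, step by step
        image = stego - self.f(mark)
        return image, mark

layer = AdditiveCoupling()
img, wm = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
stego, latent = layer.embed(img, wm)
rec_img, rec_wm = layer.extract(stego, latent)  # recovers img and wm up to float error
```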
Our situated environment is full of uncertainty and highly dynamic, thus hindering the widespread adoption of machine-led Intelligent Decision-Making (IDM) in real-world scenarios. This means IDM should have the capability of continuously learning new skills and efficiently generalizing across wider applications. IDM benefits from any new approaches and theoretical breakthroughs that exhibit Artificial General Intelligence (AGI) breaking the barriers between tasks and applications. Recent research has thoroughly examined the Transformer neural architecture as a backbone foundation model and its generalization to various tasks, including computer vision, natural language processing, and reinforcement learning. We therefore argue that a foundation decision model (FDM) can be established by formulating various decision-making tasks as a sequence decoding task using the Transformer architecture; this would be a promising solution to advance the applications of IDM in more complex real-world tasks. In this paper, we elaborate on how a foundation decision model improves the efficiency and generalization of IDM. We also discuss potential applications of an FDM in multi-agent game AI, production scheduling, and robotics tasks. Finally, through a case study, we demonstrate our realization of the FDM, DigitalBrain (DB1) with 1.2 billion parameters, which achieves human-level performance over 453 tasks, including text generation, image captioning, video game playing, robotic control, and traveling salesman problems. As a foundation decision model, DB1 would be a baby step towards more autonomous and efficient real-world IDM applications.
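As a toy illustration of "decision-making as sequence decoding", the sketch below feeds interleaved observation/action token ids to a small causal transformer that predicts the next token. The vocabulary, context length, and tokenization scheme are assumptions; it does not reflect DB1's actual architecture or scale.

```python
import torch
import torch.nn as nn

class TinyDecisionDecoder(nn.Module):
    """Hedged sketch of decision-making as sequence decoding: interleaved
    observation/action tokens form one stream and a causal transformer predicts
    the next action token. Vocabulary size, context length, and tokenization are
    illustrative assumptions."""

    def __init__(self, vocab_size: int = 1024, d_model: int = 128, context: int = 256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(context, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T) interleaved observation/action token ids, T <= context.
        t = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(t, device=tokens.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        x = self.blocks(x, mask=mask)  # causal self-attention over the token stream
        return self.head(x)            # next-token logits per position

logits = TinyDecisionDecoder()(torch.randint(0, 1024, (2, 32)))
```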